How much should you ask? On the question structure in QA systems
Datasets that boosted state-of-the-art solutions for Question Answering (QA) systems prove that it is possible to ask questions in a natural-language manner. However, users are still accustomed to query-like systems, where they type in keywords to search for an answer. In this study we examine which parts of questions are essential for obtaining a valid answer. To do so, we take advantage of LIME, a framework that explains predictions by local approximation. We find that grammar and natural language are largely disregarded by QA systems: a state-of-the-art model can answer correctly even when 'asked' with only a few words that have high LIME coefficients. To our knowledge, this is the first time a QA model has been explained with LIME.
Comment: Accepted to the Analyzing and interpreting neural networks for NLP workshop at EMNLP 2018
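A minimal sketch of how such a LIME analysis might look, using the `lime` package's `LimeTextExplainer`. The `get_answer` function is a hypothetical stand-in for a real QA model (not the authors' code), and QA is reduced to a binary classification, "does the perturbed question still yield the original answer?", which LIME can explain:

```python
# Sketch: explaining a QA model's sensitivity to question words with LIME.
# LIME perturbs the question by dropping words and fits a local linear
# model whose coefficients indicate each word's importance.
import numpy as np
from lime.lime_text import LimeTextExplainer

def get_answer(question: str) -> str:
    """Hypothetical QA model queried against a fixed context; replace
    with a real SQuAD-trained system. This stub answers from keywords."""
    return "1066" if "Normans" in question else "unknown"

def make_classifier(original_answer: str):
    def classify(questions):
        # LIME passes a batch of perturbed question strings; return
        # probabilities over [answer changed, answer unchanged].
        probs = []
        for q in questions:
            same = float(get_answer(q) == original_answer)
            probs.append([1.0 - same, same])
        return np.array(probs)
    return classify

question = "When did the Normans conquer England?"
explainer = LimeTextExplainer(class_names=["changed", "unchanged"])
explanation = explainer.explain_instance(
    question, make_classifier(get_answer(question)), num_features=5
)
print(explanation.as_list())  # (word, coefficient) pairs, highest impact first
```

Words with the highest coefficients are the ones the model actually relies on; the abstract's claim is that a handful of such words is often enough to recover the original answer.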
Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System
In this paper we present the results of an investigation into the importance of verbs in a deep learning QA system trained on the SQuAD dataset. We show that the main verbs in questions have little influence on the decisions made by the system: in over 90% of the examined cases, swapping a verb for its antonym did not change the system's decision. We trace this phenomenon to the internals of the network, analyzing the self-attention mechanism and the values contained in the hidden layers of the RNN. Finally, we identify the characteristics of the SQuAD dataset as the source of the problem. Our work relates to the recently popular topic of adversarial examples in NLP, combined with an investigation of deep network structure.
Comment: Accepted to the Analyzing and interpreting neural networks for NLP workshop at EMNLP 2018
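A hedged sketch of the kind of antonym-swap probe described above, using NLTK's WordNet interface. The `get_answer` comparison is left as a comment since it requires a real QA model, and treating the first POS-tagged verb as the "main" verb is a simplifying assumption, not necessarily the authors' procedure:

```python
# Sketch of an antonym-swap probe: replace a question's verb with a
# WordNet antonym and check whether the QA system's answer changes.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("wordnet", quiet=True)

def first_verb_antonym(question: str):
    """Return the question with its first verb swapped for an antonym,
    or None if no verb with a WordNet antonym is found."""
    tokens = nltk.word_tokenize(question)
    tagged = nltk.pos_tag(tokens)
    for i, (word, tag) in enumerate(tagged):
        if tag.startswith("VB"):
            for synset in wn.synsets(word, pos=wn.VERB):
                for lemma in synset.lemmas():
                    if lemma.antonyms():
                        swapped = tokens.copy()
                        swapped[i] = lemma.antonyms()[0].name()
                        return " ".join(swapped)
    return None

original = "When did the war start in Europe?"
perturbed = first_verb_antonym(original)
if perturbed:
    print(perturbed)
    # Compare get_answer(original) with get_answer(perturbed); the paper
    # reports the answers agree in over 90% of the examined cases.
```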
Multi-modal Embedding Fusion-based Recommender
Recommendation systems have lately been popularized globally, with primary use cases in online interaction systems and a significant focus on e-commerce platforms. We have developed a machine learning-based recommendation platform that can be easily applied to almost any domain of items and/or actions. Unlike existing recommendation systems, our platform natively supports multiple types of interaction data together with multiple modalities of metadata. This is achieved through multi-modal fusion of various data representations. We have deployed the platform in multiple e-commerce stores of different kinds, e.g. food and beverages, shoes, fashion items, and telecom operators. Here, we present our system and its flexibility and performance. We also show benchmark results on open datasets that significantly outperform prior state-of-the-art work.
Comment: 7 pages, 8 figures
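The abstract does not specify the fusion architecture, but a common baseline for multi-modal embedding fusion is to project each modality's embedding into a shared space and combine the projections. A minimal PyTorch sketch under that assumption, where all dimensions and module names are illustrative and not the authors' design:

```python
# Illustrative multi-modal fusion baseline (not the authors' architecture):
# project per-modality embeddings into a shared space, then fuse them
# into a single item vector usable for recommendation scoring.
import torch
import torch.nn as nn

class FusionRecommender(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, interaction_dim=64,
                 shared_dim=128):
        super().__init__()
        # One projection per modality into the shared embedding space.
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.image_proj = nn.Linear(image_dim, shared_dim)
        self.inter_proj = nn.Linear(interaction_dim, shared_dim)
        self.fuse = nn.Sequential(
            nn.Linear(3 * shared_dim, shared_dim), nn.ReLU()
        )

    def forward(self, text_emb, image_emb, inter_emb):
        parts = [
            self.text_proj(text_emb),
            self.image_proj(image_emb),
            self.inter_proj(inter_emb),
        ]
        # Concatenate and fuse; items could then be scored, e.g., by a
        # dot product between this vector and a user vector.
        return self.fuse(torch.cat(parts, dim=-1))

model = FusionRecommender()
item = model(torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 64))
print(item.shape)  # torch.Size([1, 128])
```

Concatenate-and-project is only one of several fusion strategies (others include attention-based or gated fusion); the abstract leaves the specific choice open.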
Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations
Recently introduced self-supervised methods for image representation learning provide results on par with or superior to their fully supervised competitors, yet the corresponding efforts to explain the self-supervised approaches lag behind. Motivated by this observation, we introduce a novel visual probing framework for explaining self-supervised models by leveraging probing tasks employed previously in natural language processing. The probing tasks require knowledge about semantic relationships between image parts. Hence, we propose a systematic approach to obtain analogs of natural language in vision, such as visual words, context, and taxonomy. Our proposal is grounded in Marr's computational theory of vision and concerns features like textures, shapes, and lines. We show the effectiveness and applicability of these analogs in the context of explaining self-supervised representations. Our key findings emphasize that relations between language and vision can serve as an effective yet intuitive tool for discovering how machine learning models work, independently of data modality. Our work opens a plethora of research pathways towards more explainable and transparent AI.
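As a rough illustration of the probing idea (not the paper's specific visual-word pipeline), one can train a simple linear probe on frozen self-supervised features and read its accuracy as a measure of what the representation encodes. A minimal scikit-learn sketch, where `features` and `labels` are stand-ins for frozen encoder embeddings and a probing-task target such as visual-word identity:

```python
# Minimal linear-probing sketch (illustrative, not the paper's pipeline):
# a frozen self-supervised encoder produces features; a linear classifier
# trained on top measures how much of a probing target those features encode.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins: `features` would come from a frozen encoder, `labels` from a
# probing task (e.g., which visual word an image patch belongs to).
features = rng.normal(size=(1000, 256))
labels = rng.integers(0, 10, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# High probe accuracy suggests the frozen representation linearly encodes
# the probed property; chance level here is 0.1.
print("probe accuracy:", probe.score(X_test, y_test))
```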